2025-07-28 10:01:29,100 [ 204093 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-07-28 10:01:29,101 [ 204093 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:79, check_args_and_update_paths)
2025-07-28 10:01:29,101 [ 204093 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:90, check_args_and_update_paths)
2025-07-28 10:01:29,101 [ 204093 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:92, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_gezf6p --privileged --dns-search='.' --memory=30709022720 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=1 --color=no --durations=0 --report-log=parallel0_1.jsonl --report-log-exclude-logs-on-passed-tests test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]' 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]' -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 '.
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
Test order randomisation NOT enabled.
Enable with --random-order or --random-order-bucket=
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [3 items]

scheduling tests via LoadFileScheduling

test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
[gw5] [ 33%] FAILED test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
[gw1] [ 66%] FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
[gw1] [100%] FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]

=================================== FAILURES ===================================
_________________________ test_shutdown_cancels_backup _________________________
[gw5] linux -- Python 3.10.12 /usr/bin/python3

    def test_shutdown_cancels_backup():
>       with NoTrashChecker() as no_trash_checker:

test_backup_restore_on_cluster/test_cancel_backup.py:556:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = , type = None, value = None, traceback = None

    def __exit__(self, type, value, traceback):
        list_of_znodes = set(
            node1.query(
                "SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups' "
                + "AND NOT (name == 'alive_tracker')"
            ).splitlines()
        )
        new_znodes = list_of_znodes.difference(self.__previous_list_of_znodes)
        if new_znodes:
            print(f"Found nodes in ZooKeeper: {new_znodes}")
            for node in new_znodes:
                print(
                    f"Nodes in '/clickhouse/backups/{node}':\n"
                    + node1.query(
                        f"SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups/{node}'"
                    )
                )
                print(
                    f"Nodes in '/clickhouse/backups/{node}/stage':\n"
                    + node1.query(
                        f"SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups/{node}/stage'"
                    )
                )
        if self.check_zookeeper:
            assert new_znodes == set()

        list_of_backups = set(
            os.listdir(os.path.join(node1.cluster.instances_dir, "backups"))
        )
        new_backups = list_of_backups.difference(self.__previous_list_of_backups)
        unfinished_backups = set(
            backup
            for backup in new_backups
            if not os.path.exists(
                os.path.join(node1.cluster.instances_dir, "backups", backup, ".backup")
            )
        )
        new_backups = set(
            backup for backup in new_backups if backup not in unfinished_backups
        )
        if new_backups:
            print(f"Found new backups: {new_backups}")
        if unfinished_backups:
            print(f"Found unfinished backups: {unfinished_backups}")
        assert new_backups == set(self.expect_backups)
        assert unfinished_backups.difference(self.allow_unfinished_backups) == set()

        all_errors = set()
        start_time = time.strftime(
            "%Y-%m-%d %H:%M:%S", self.__start_time_for_collecting_errors
        )
        for node in nodes:
            errors_query_result = node.query(
                "SELECT name FROM system.errors WHERE last_error_time >= toDateTime('"
                + start_time
                + "') "
                + "AND NOT ((name == 'KEEPER_EXCEPTION') AND (last_error_message LIKE '%Fault injection%')) "
                + "AND NOT (name == 'NO_ELEMENTS_IN_CONFIG')"
            )
            errors = errors_query_result.splitlines()
            if errors:
                print(f"{get_node_name(node)}: Found errors: {errors}")
                print(
                    node.query(
                        "SELECT name, last_error_message FROM system.errors WHERE last_error_time >= toDateTime('"
                        + start_time
                        + "')"
                    )
                )
            for error in errors:
>               assert (error in self.expect_errors) or (error in self.allow_errors)
E               AssertionError: assert ('NETLINK_ERROR' in ['QUERY_WAS_CANCELLED'] or 'NETLINK_ERROR' in
[])
E                +  where ['QUERY_WAS_CANCELLED'] = .expect_errors
E                +  and   [] = .allow_errors

test_backup_restore_on_cluster/test_cancel_backup.py:394: AssertionError
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2025-07-28 10:01:39.204000 [ 616 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:121, run_and_check)
2025-07-28 10:01:39.234000 [ 616 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-07-28 10:01:39.234000 [ 616 ] DEBUG : No running containers (conftest.py:95, cleanup_environment)
2025-07-28 10:01:39.235000 [ 616 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment)
2025-07-28 10:01:39.235000 [ 616 ] DEBUG : Command:[docker network prune --force] (cluster.py:121, run_and_check)
2025-07-28 10:01:39.264000 [ 616 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:121, run_and_check)
2025-07-28 10:01:39.267000 [ 616 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:145, run_and_check)
2025-07-28 10:01:39.268000 [ 616 ] INFO : Running tests in /ClickHouse/tests/integration/test_backup_restore_on_cluster/test_cancel_backup.py (cluster.py:2738, start)
2025-07-28 10:01:39.269000 [ 616 ] DEBUG : Cluster start called. is_up=False (cluster.py:2745, start)
2025-07-28 10:01:39.296000 [ 616 ] DEBUG : Docker networks for project roottestbackuprestoreonclustercancelbackup-gw5 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-07-28 10:01:39.329000 [ 616 ] DEBUG : Docker containers for project roottestbackuprestoreonclustercancelbackup-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-07-28 10:01:39.357000 [ 616 ] DEBUG : Docker volumes for project roottestbackuprestoreonclustercancelbackup-gw5 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-07-28 10:01:39.358000 [ 616 ] DEBUG : Cleanup called (cluster.py:851, cleanup)
2025-07-28 10:01:39.387000 [ 616 ] DEBUG : Docker networks for project roottestbackuprestoreonclustercancelbackup-gw5 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-07-28 10:01:39.417000 [ 616 ] DEBUG : Docker containers for project roottestbackuprestoreonclustercancelbackup-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-07-28 10:01:39.448000 [ 616 ] DEBUG : Docker volumes for project roottestbackuprestoreonclustercancelbackup-gw5 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-07-28 10:01:39.448000 [ 616 ] DEBUG : Command:[docker container list --all --filter name='^/roottestbackuprestoreonclustercancelbackup-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check)
2025-07-28 10:01:39.478000 [ 616 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup)
2025-07-28 10:01:39.478000 [ 616 ] DEBUG : No running containers for project: roottestbackuprestoreonclustercancelbackup-gw5 (cluster.py:879, cleanup)
2025-07-28 10:01:39.479000 [ 616 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup)
2025-07-28 10:01:39.508000 [ 616 ] DEBUG : Trying to prune unused images...
(cluster.py:901, cleanup) 2025-07-28 10:01:39.508000 [ 616 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.546000 [ 616 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-07-28 10:01:39.546000 [ 616 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-07-28 10:01:39.546000 [ 616 ] DEBUG : Trying to prune unused volumes... (cluster.py:910, cleanup) 2025-07-28 10:01:39.547000 [ 616 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.576000 [ 616 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-07-28 10:01:39.577000 [ 616 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup) 2025-07-28 10:01:39.577000 [ 616 ] DEBUG : Setup directory for instance: node1 (cluster.py:2758, start) 2025-07-28 10:01:39.578000 [ 616 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir) 2025-07-28 10:01:39.578000 [ 616 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir) 2025-07-28 10:01:39.578000 [ 616 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir) 2025-07-28 10:01:39.579000 [ 616 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir) 2025-07-28 10:01:39.580000 [ 616 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/backups_disk.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/lesser_timeouts.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/slow_backups.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/shutdown_cancel_backups.xml'] to /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/configs/config.d (cluster.py:4741, create_dir) 2025-07-28 10:01:39.583000 [ 616 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/database (cluster.py:4758, create_dir) 2025-07-28 10:01:39.584000 [ 616 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/logs (cluster.py:4769, create_dir) 2025-07-28 10:01:39.584000 [ 616 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
(cluster.py:4850, create_dir) 2025-07-28 10:01:39.584000 [ 616 ] INFO : external_dir_abs_path=/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/backups (cluster.py:4879, create_dir) 2025-07-28 10:01:39.584000 [ 616 ] DEBUG : Setup directory for instance: node2 (cluster.py:2758, start) 2025-07-28 10:01:39.585000 [ 616 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir) 2025-07-28 10:01:39.586000 [ 616 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir) 2025-07-28 10:01:39.586000 [ 616 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir) 2025-07-28 10:01:39.587000 [ 616 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir) 2025-07-28 10:01:39.587000 [ 616 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/backups_disk.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/lesser_timeouts.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/slow_backups.xml', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/configs/shutdown_cancel_backups.xml'] to /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/configs/config.d (cluster.py:4741, create_dir) 2025-07-28 10:01:39.589000 [ 616 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/database (cluster.py:4758, create_dir) 2025-07-28 10:01:39.589000 [ 616 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/logs (cluster.py:4769, create_dir) 2025-07-28 10:01:39.589000 [ 616 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
(cluster.py:4850, create_dir) 2025-07-28 10:01:39.589000 [ 616 ] INFO : external_dir_abs_path=/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/backups (cluster.py:4879, create_dir) 2025-07-28 10:01:39.589000 [ 616 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:5ccda723c1fc', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env (cluster.py:96, _create_env_file) 2025-07-28 10:01:39.590000 [ 616 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-07-28 10:01:39.590000 [ 616 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-07-28 10:01:39.591000 [ 616 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-07-28 10:01:39.591000 [ 616 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-07-28 10:01:39.605000 [ 616 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-07-28 10:01:39.606000 [ 616 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env --project-name roottestbackuprestoreonclustercancelbackup-gw5 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/docker-compose.yml pull] (cluster.py:121, run_and_check) 2025-07-28 10:01:50.088000 [ 616 ] DEBUG : Stderr: zoo3 Skipped - Image is already being pulled by zoo2 (cluster.py:147, run_and_check) 2025-07-28 10:01:50.088000 [ 616 ] DEBUG : Stderr: node2 Skipped - Image is already being pulled by zoo2 (cluster.py:147, run_and_check) 2025-07-28 10:01:50.089000 [ 616 ] DEBUG : Stderr: node1 Skipped - Image 
is already being pulled by zoo2 (cluster.py:147, run_and_check) 2025-07-28 10:01:50.089000 [ 616 ] DEBUG : Stderr: zoo1 Skipped - Image is already being pulled by zoo2 (cluster.py:147, run_and_check) 2025-07-28 10:01:50.089000 [ 616 ] DEBUG : Stderr: zoo2 Pulling (cluster.py:147, run_and_check) 2025-07-28 10:01:50.089000 [ 616 ] DEBUG : Stderr: zoo2 Pulled (cluster.py:147, run_and_check) 2025-07-28 10:01:50.090000 [ 616 ] DEBUG : Setup ZooKeeper (cluster.py:2799, start) 2025-07-28 10:01:50.090000 [ 616 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper1/log', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper1/config', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper1/coordination', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper2/log', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper2/config', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper2/coordination', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper3/log', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper3/config', '/ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/keeper3/coordination'] (cluster.py:2800, start) 2025-07-28 10:01:50.093000 [ 616 ] DEBUG : Command:[docker compose --project-name roottestbackuprestoreonclustercancelbackup-gw5 --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] (cluster.py:121, run_and_check) 2025-07-28 10:01:51.097000 [ 616 ] DEBUG : Stderr:time="2025-07-28T10:01:50Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:147, run_and_check) 2025-07-28 10:01:51.097000 [ 616 ] DEBUG : Stderr: Network roottestbackuprestoreonclustercancelbackup-gw5_default Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:51.097000 [ 616 ] DEBUG : Stderr: Network roottestbackuprestoreonclustercancelbackup-gw5_default Created (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Created (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Created (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Created (cluster.py:147, run_and_check) 2025-07-28 10:01:51.098000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Starting (cluster.py:147, run_and_check) 2025-07-28 
10:01:51.099000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Starting (cluster.py:147, run_and_check) 2025-07-28 10:01:51.099000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Starting (cluster.py:147, run_and_check) 2025-07-28 10:01:51.099000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Started (cluster.py:147, run_and_check) 2025-07-28 10:01:51.099000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Started (cluster.py:147, run_and_check) 2025-07-28 10:01:51.099000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Started (cluster.py:147, run_and_check) 2025-07-28 10:01:51.100000 [ 616 ] DEBUG : Stderr:time="2025-07-28T10:01:51Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check) 2025-07-28 10:01:51.100000 [ 616 ] DEBUG : Stderr:time="2025-07-28T10:01:51Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check) 2025-07-28 10:01:51.100000 [ 616 ] DEBUG : Wait ZooKeeper to start (cluster.py:2436, wait_zookeeper_to_start) 2025-07-28 10:01:51.100000 [ 616 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2005, get_instance_ip) 2025-07-28 10:01:51.103000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.104000 [ 616 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.2.3, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client) 2025-07-28 10:01:51.106000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:51.107000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-07-28 10:01:51.170000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:51.171000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-07-28 10:01:51.284000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:51.285000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-07-28 10:01:51.511000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:51.512000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-07-28 10:01:51.862000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:51.863000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-07-28 10:01:52.733000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:52.734000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-07-28 10:01:54.098000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:54.099000 [ 616 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, 
_connect_attempt) 2025-07-28 10:01:57.071000 [ 616 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:57.071000 [ 616 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-07-28 10:01:57.077000 [ 616 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-07-28 10:01:57.078000 [ 616 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-07-28 10:01:57.079000 [ 616 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-07-28 10:01:57.080000 [ 616 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-07-28 10:01:57.085000 [ 616 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-07-28 10:01:57.086000 [ 616 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-07-28 10:01:57.086000 [ 616 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-07-28 10:01:57.150000 [ 616 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2025-07-28 10:01:57.150000 [ 616 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-07-28 10:01:57.151000 [ 616 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2005, get_instance_ip) 2025-07-28 10:01:57.154000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:57.155000 [ 616 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.2.2, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client) 2025-07-28 10:01:57.156000 [ 616 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:57.157000 [ 616 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-07-28 10:01:57.166000 [ 616 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-07-28 10:01:57.167000 [ 616 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-07-28 10:01:57.168000 [ 616 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-07-28 10:01:57.169000 [ 616 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-07-28 10:01:57.174000 [ 616 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-07-28 10:01:57.174000 [ 616 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-07-28 10:01:57.174000 [ 616 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-07-28 10:01:57.275000 [ 616 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-07-28 10:01:57.276000 [ 616 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-07-28 10:01:57.276000 [ 616 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2005, get_instance_ip) 2025-07-28 10:01:57.279000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:57.280000 [ 616 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.2.4, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client) 2025-07-28 10:01:57.282000 [ 616 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-07-28 10:01:57.283000 [ 616 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-07-28 10:01:57.291000 [ 616 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-07-28 10:01:57.291000 [ 616 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-07-28 10:01:57.292000 [ 616 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-07-28 10:01:57.293000 [ 616 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-07-28 10:01:57.298000 [ 616 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-07-28 10:01:57.299000 [ 616 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-07-28 10:01:57.299000 [ 616 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-07-28 10:01:57.395000 [ 616 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-07-28 10:01:57.395000 [ 616 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-07-28 10:01:57.396000 [ 616 ] DEBUG : All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') (cluster.py:2452, wait_zookeeper_nodes_to_start) 2025-07-28 10:01:57.396000 [ 616 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env --project-name roottestbackuprestoreonclustercancelbackup-gw5 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/docker-compose.yml up -d --no-recreate') (cluster.py:3139, start) 2025-07-28 10:01:57.396000 [ 616 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env --project-name roottestbackuprestoreonclustercancelbackup-gw5 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check) 2025-07-28 10:01:58.011000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Running (cluster.py:147, run_and_check) 2025-07-28 10:01:58.011000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Running (cluster.py:147, run_and_check) 2025-07-28 10:01:58.011000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Running (cluster.py:147, run_and_check) 2025-07-28 10:01:58.011000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:58.011000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Created (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Created (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Starting (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Starting (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Started (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Started (cluster.py:147, run_and_check) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : ClickHouse instance created (cluster.py:3147, start) 2025-07-28 10:01:58.012000 [ 616 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2005, get_instance_ip) 
2025-07-28 10:01:58.015000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.016000 [ 616 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2015, get_instance_global_ipv6) 2025-07-28 10:01:58.019000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.019000 [ 616 ] DEBUG : Waiting for ClickHouse start in node1, ip: 172.16.2.5... (cluster.py:3155, start) 2025-07-28 10:01:58.022000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.025000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.129000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.234000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.338000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.443000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.548000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.653000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.758000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.862000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:58.966000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.070000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.174000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.279000 [ 616 ] DEBUG : http://localhost:None "GET 
/v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.384000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.488000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.593000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.697000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.801000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:59.905000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.010000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.114000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.218000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.323000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.427000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/e01ac8e9e85344c269f4cec95c90e9db5d9cc925237fbc3bf28b0b03c4636eaf/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.428000 [ 616 ] DEBUG : ClickHouse node1 started (cluster.py:3159, start) 2025-07-28 10:02:00.428000 [ 616 ] DEBUG : get_instance_ip instance_name=node2 (cluster.py:2005, get_instance_ip) 2025-07-28 10:02:00.431000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.432000 [ 616 ] DEBUG : get_instance_ip instance_name=node2 (cluster.py:2015, get_instance_global_ipv6) 2025-07-28 10:02:00.435000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.436000 [ 616 ] DEBUG : Waiting for ClickHouse start in node2, ip: 172.16.2.6... 
(cluster.py:3155, start) 2025-07-28 10:02:00.438000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.441000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/containers/875bd168210912bbf826da3216a1432f8161c7a191636763313dfa061cc81c3a/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:02:00.442000 [ 616 ] DEBUG : ClickHouse node2 started (cluster.py:3159, start) ----------------------------- Captured stdout call ----------------------------- Using node1 as initiator Sleeping 0.17636306448875205 seconds Waiting for number of system processes = 1+ Got 1 system processes for backup c50171435ebf41dd9af29708b6536513 after waiting 0 seconds node2: Restarting... node2: Restarted Waiting for number of system processes = 0 Got 0 system processes for backup c50171435ebf41dd9af29708b6536513 after waiting 0 seconds node1: Found errors: ['QUERY_WAS_CANCELLED'] QUERY_WAS_CANCELLED Got error from host node2:9000. DB::Exception: Query was cancelled. Stack trace:\n\n0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x00000000382e5051\n1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bd54ed1\n2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000c38e20b\n3. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000c3a87f2\n4. ./build_docker/./src/Interpreters/ProcessList.cpp:567: DB::QueryStatus::throwQueryWasCancelled() const @ 0x000000002a684282\n5. ./build_docker/./src/Interpreters/ProcessList.cpp:520: DB::QueryStatus::throwProperExceptionIfNeeded(unsigned long const&, unsigned long const&) @ 0x000000002a68402b\n6. ./build_docker/./src/Interpreters/ProcessList.cpp:557: DB::QueryStatus::checkTimeLimit() @ 0x000000002a685017\n7. ./build_docker/./src/Backups/BackupEntriesCollector.cpp:188: DB::BackupEntriesCollector::setStage(String const&, String const&) @ 0x00000000274d1256\n8. ./build_docker/./src/Backups/BackupEntriesCollector.cpp:218: DB::BackupEntriesCollector::gatherMetadataAndCheckConsistency() @ 0x00000000274cc76a\n9. ./build_docker/./src/Backups/BackupEntriesCollector.cpp:147: DB::BackupEntriesCollector::run() @ 0x00000000274ca663\n10. ./build_docker/./src/Backups/BackupsWorker.cpp:582: DB::BackupsWorker::doBackup(std::shared_ptr, std::shared_ptr const&, String const&, DB::BackupSettings const&, std::shared_ptr, std::shared_ptr, std::shared_ptr const&, bool, std::shared_ptr const&) @ 0x000000002753d8e5\n11. ./build_docker/./src/Backups/BackupsWorker.cpp:418: DB::BackupsWorker::BackupStarter::doBackup() @ 0x0000000027558e51\n12. ./build_docker/./src/Backups/BackupsWorker.cpp:488: void std::__function::__policy_invoker::__call_impl[abi:ne190107] const&, std::shared_ptr const&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000002754e2b3\n13. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000026b1f31a\n14. ./contrib/llvm-project/libcxx/include/future:1589: std::packaged_task::operator()() @ 0x0000000026b1f90c\n15. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c024432\n16. 
./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImpl>::ThreadFromThreadPool::*)(), ThreadPoolImpl>::ThreadFromThreadPool*>(void (ThreadPoolImpl>::ThreadFromThreadPool::*&&)(), ThreadPoolImpl>::ThreadFromThreadPool*&&)::\'lambda\'()::operator()() @ 0x000000001c032383\n17. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c01ec11\n18. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: void* std::__thread_proxy[abi:ne190107]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001c02d070\n19. asan_thread_start(void*) @ 0x000000000c340e77\n20. ? @ 0x00007f00e5be0ac3\n21. ? @ 0x00007f00e5c72850\n\nJob\'s origin stack trace:\n0. ./build_docker/./src/Common/StackTrace.cpp:386: StackTrace::StackTrace() @ 0x000000001bebb547\n1. ./build_docker/./src/Common/ThreadPool.cpp:130: void boost::heap::priority_queue, boost::parameter::void_, boost::parameter::void_, boost::parameter::void_>::emplace, Priority&, StrongTypedef&, DB::OpenTelemetry::TracingContextOnThread const, bool&, (anonymous namespace)::ScopedDecrement>(std::function&&, Priority&, StrongTypedef&, DB::OpenTelemetry::TracingContextOnThread const&&, bool&, (anonymous namespace)::ScopedDecrement&&) @ 0x000000001c01d1c4\n2. ./build_docker/./src/Common/ThreadPool.cpp:401: void ThreadPoolImpl>::scheduleImpl(std::function, Priority, std::optional, bool) @ 0x000000001c027eca\n3. ./build_docker/./src/Common/ThreadPool.cpp:494: ThreadPoolImpl>::scheduleOrThrowOnError(std::function, Priority) @ 0x000000001c0272f7\n4. ./src/Common/threadPoolCallbackRunner.h:52: std::function (std::function&&, Priority)> DB::threadPoolCallbackRunnerUnsafe>(ThreadPoolImpl>&, String const&)::\'lambda\'(std::function&&, Priority)::operator()(std::function&&, Priority) @ 0x0000000026b1dcfc\n5. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:149: std::future std::__function::__policy_invoker (std::function&&, Priority)>::__call_impl[abi:ne190107] (std::function&&, Priority)> DB::threadPoolCallbackRunnerUnsafe>(ThreadPoolImpl>&, String const&)::\'lambda\'(std::function&&, Priority), std::future (std::function&&, Priority)>>(std::__function::__policy_storage const*, std::function&&, Priority&&) @ 0x0000000026b1d854\n6. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000027539e52\n7. ./build_docker/./src/Backups/BackupsWorker.cpp:334: DB::BackupsWorker::start(std::shared_ptr const&, std::shared_ptr) @ 0x000000002753998a\n8. ./build_docker/./src/Interpreters/InterpreterBackupQuery.cpp:44: DB::InterpreterBackupQuery::execute() @ 0x000000002acdcdfe\n9. ./build_docker/./src/Interpreters/executeQuery.cpp:1457: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x000000002abdbfa3\n10. ./build_docker/./src/Interpreters/executeQuery.cpp:1761: DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr, std::function, DB::QueryFlags, std::optional const&, std::function const&, std::optional const&)>) @ 0x000000002abe393f\n11. ./build_docker/./src/Interpreters/DDLWorker.cpp:510: DB::DDLWorker::tryExecuteQuery(DB::DDLTaskBase&, std::shared_ptr const&, bool) @ 0x00000000297682f2\n12. ./build_docker/./src/Interpreters/DDLWorker.cpp:675: DB::DDLWorker::processTask(DB::DDLTaskBase&, std::shared_ptr const&, bool) @ 0x0000000029763fb9\n13. 
./build_docker/./src/Interpreters/DDLWorker.cpp:453: DB::DDLWorker::scheduleTasks(bool) @ 0x000000002975f5d9\n14. ./build_docker/./src/Interpreters/DDLWorker.cpp:1203: DB::DDLWorker::runMainThread() @ 0x00000000297537ac\n15. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImpl(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::\'lambda\'()::operator()() @ 0x0000000029786f23\n16. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c01ec11\n17. ./contrib/llvm-project/libcxx/include/__type_traits/invoke.h:117: void* std::__thread_proxy[abi:ne190107]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001c02d070\n18. asan_thread_start(void*) @ 0x000000000c340e77\n19. ? @ 0x00007f00e5be0ac3\n20. ? @ 0x00007f00e5c72850\n node2: Found errors: ['NETLINK_ERROR'] NO_ELEMENTS_IN_CONFIG Certificate file is not set. NETLINK_ERROR Can\'t receive Netlink response: error -2 ------------------------------ Captured log call ------------------------------- 2025-07-28 10:02:01.447000 [ 616 ] DEBUG : Executing query SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups' AND NOT (name == 'alive_tracker') on node1 (cluster.py:3648, query) 2025-07-28 10:02:01.865000 [ 616 ] DEBUG : Executing query CREATE TABLE tbl ON CLUSTER 'cluster' (x UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/tbl/', '{replica}') ORDER BY tuple() PARTITION BY x%10 on node2 (cluster.py:3648, query) 2025-07-28 10:02:02.382000 [ 616 ] DEBUG : Executing query INSERT INTO tbl SELECT number FROM numbers(10) on node2 (cluster.py:3648, query) 2025-07-28 10:02:03.101000 [ 616 ] DEBUG : Executing query BACKUP TABLE tbl ON CLUSTER 'cluster' TO Disk('backups', 'c50171435ebf41dd9af29708b6536513') SETTINGS id='c50171435ebf41dd9af29708b6536513' ASYNC on node1 (cluster.py:3648, query) 2025-07-28 10:02:03.468000 [ 616 ] DEBUG : Executing query SELECT status FROM system.backups WHERE id='c50171435ebf41dd9af29708b6536513' on node1 (cluster.py:3648, query) 2025-07-28 10:02:03.785000 [ 616 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%c50171435ebf41dd9af29708b6536513%') on node1 (cluster.py:3648, query) 2025-07-28 10:02:04.380000 [ 616 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%c50171435ebf41dd9af29708b6536513%') on node2 (cluster.py:3648, query) 2025-07-28 10:02:04.798000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:04.798000 [ 616 ] DEBUG : Command:[docker exec -u root roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps -C clickhouse] (cluster.py:121, run_and_check) 2025-07-28 10:02:04.869000 [ 616 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:145, run_and_check) 2025-07-28 10:02:04.869000 [ 616 ] DEBUG : Stdout: 8 ? 
00:00:03 clickhouse (cluster.py:145, run_and_check) 2025-07-28 10:02:04.869000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:04.870000 [ 616 ] DEBUG : Command:[docker exec -u root roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c pkill clickhouse] (cluster.py:121, run_and_check) 2025-07-28 10:02:04.943000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:04.943000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:05.008000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:06.009000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:06.010000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:06.084000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:07.086000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:07.086000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:07.165000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:08.167000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:08.167000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:08.246000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:09.248000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:09.248000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 
'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:09.329000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:10.329000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:10.330000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:10.401000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:11.402000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:11.403000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:11.487000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:12.488000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:12.489000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:12.560000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:13.561000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:13.562000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:13.640000 [ 616 ] DEBUG : Stdout:8 (cluster.py:145, run_and_check) 2025-07-28 10:02:14.641000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:14.642000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:14.726000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk 
'{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:14.727000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:14.805000 [ 616 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:4014, start_clickhouse) 2025-07-28 10:02:14.809000 [ 616 ] DEBUG : http://localhost:None "POST /v1.46/containers/roottestbackuprestoreonclustercancelbackup-gw5-node2-1/exec HTTP/1.1" 201 74 (connectionpool.py:547, _make_request) 2025-07-28 10:02:14.853000 [ 616 ] DEBUG : http://localhost:None "POST /v1.46/exec/63a7aa53e7f98213bb7308ceec13bbf4044e01696e45f53f00f1e634ac133746/start HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-07-28 10:02:14.857000 [ 616 ] DEBUG : http://localhost:None "GET /v1.46/exec/63a7aa53e7f98213bb7308ceec13bbf4044e01696e45f53f00f1e634ac133746/json HTTP/1.1" 200 585 (connectionpool.py:547, _make_request) 2025-07-28 10:02:15.858000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:15.859000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:15.933000 [ 616 ] DEBUG : Stdout:839 (cluster.py:145, run_and_check) 2025-07-28 10:02:15.934000 [ 616 ] DEBUG : Clickhouse process running. 
(cluster.py:4028, start_clickhouse) 2025-07-28 10:02:15.934000 [ 616 ] DEBUG : run container_id:roottestbackuprestoreonclustercancelbackup-gw5-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2051, exec_in_container) 2025-07-28 10:02:15.934000 [ 616 ] DEBUG : Command:[docker exec roottestbackuprestoreonclustercancelbackup-gw5-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:16.007000 [ 616 ] DEBUG : Stdout:839 (cluster.py:145, run_and_check) 2025-07-28 10:02:16.007000 [ 616 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query) 2025-07-28 10:02:16.825000 [ 616 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query) 2025-07-28 10:02:17.643000 [ 616 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query) 2025-07-28 10:02:18.512000 [ 616 ] DEBUG : Executing query select 20 on node2 (cluster.py:3648, query) 2025-07-28 10:02:18.880000 [ 616 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%c50171435ebf41dd9af29708b6536513%') on node1 (cluster.py:3648, query) 2025-07-28 10:02:19.298000 [ 616 ] DEBUG : Executing query SELECT count() FROM system.processes WHERE (query_kind='Backup') AND (query LIKE '%c50171435ebf41dd9af29708b6536513%') on node2 (cluster.py:3648, query) 2025-07-28 10:02:19.715000 [ 616 ] DEBUG : Executing query SELECT status FROM system.backups WHERE id='c50171435ebf41dd9af29708b6536513' on node1 (cluster.py:3648, query) 2025-07-28 10:02:20.133000 [ 616 ] DEBUG : Executing query SELECT error FROM system.backups WHERE id='c50171435ebf41dd9af29708b6536513' on node1 (cluster.py:3648, query) 2025-07-28 10:02:20.550000 [ 616 ] DEBUG : Executing query SYSTEM FLUSH LOGS on node1 (cluster.py:3648, query) 2025-07-28 10:02:21.470000 [ 616 ] DEBUG : Executing query SELECT status FROM system.backup_log WHERE id='c50171435ebf41dd9af29708b6536513' ORDER BY status on node1 (cluster.py:3648, query) 2025-07-28 10:02:21.938000 [ 616 ] DEBUG : Executing query SELECT name FROM system.zookeeper WHERE path = '/clickhouse/backups' AND NOT (name == 'alive_tracker') on node1 (cluster.py:3648, query) 2025-07-28 10:02:22.304000 [ 616 ] DEBUG : Executing query SELECT name FROM system.errors WHERE last_error_time >= toDateTime('2025-07-28 10:02:01') AND NOT ((name == 'KEEPER_EXCEPTION') AND (last_error_message LIKE '%Fault injection%')) AND NOT (name == 'NO_ELEMENTS_IN_CONFIG') on node1 (cluster.py:3648, query) 2025-07-28 10:02:22.722000 [ 616 ] DEBUG : Executing query SELECT name, last_error_message FROM system.errors WHERE last_error_time >= toDateTime('2025-07-28 10:02:01') on node1 (cluster.py:3648, query) 2025-07-28 10:02:23.089000 [ 616 ] DEBUG : Executing query SELECT name FROM system.errors WHERE last_error_time >= toDateTime('2025-07-28 10:02:01') AND NOT ((name == 'KEEPER_EXCEPTION') AND (last_error_message LIKE '%Fault injection%')) AND NOT (name == 'NO_ELEMENTS_IN_CONFIG') on node2 (cluster.py:3648, query) 2025-07-28 10:02:23.456000 [ 616 ] DEBUG : Executing query SELECT name, last_error_message FROM system.errors WHERE last_error_time >= toDateTime('2025-07-28 10:02:01') on node2 (cluster.py:3648, query) ---------------------------- Captured log teardown ----------------------------- 2025-07-28 10:02:23.936000 [ 616 ] DEBUG : Executing query DROP TABLE IF EXISTS 
tbl ON CLUSTER 'cluster' SYNC on node1 (cluster.py:3648, query) 2025-07-28 10:02:24.404000 [ 616 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env --project-name roottestbackuprestoreonclustercancelbackup-gw5 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.015000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.015000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.015000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.015000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.015000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.016000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.016000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.016000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.016000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.016000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.016000 [ 616 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.033000 [ 616 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.051000 [ 616 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/.env --project-name roottestbackuprestoreonclustercancelbackup-gw5 --file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml 
--file /ClickHouse/tests/integration/test_backup_restore_on_cluster/_instances-cancel_backup-1-gw5/node2/docker-compose.yml down --volumes] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.645000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.645000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.645000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.645000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Removing (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Removing (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node1-1 Removed (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-node2-1 Removed (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:02:26.646000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.647000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Removing (cluster.py:147, run_and_check) 2025-07-28 10:02:26.647000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.647000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Removing (cluster.py:147, run_and_check) 2025-07-28 10:02:26.647000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:02:26.647000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Removing (cluster.py:147, run_and_check) 2025-07-28 10:02:26.647000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo1-1 Removed (cluster.py:147, run_and_check) 2025-07-28 10:02:26.648000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo2-1 Removed (cluster.py:147, run_and_check) 2025-07-28 10:02:26.648000 [ 616 ] DEBUG : Stderr: Container roottestbackuprestoreonclustercancelbackup-gw5-zoo3-1 Removed (cluster.py:147, run_and_check) 2025-07-28 10:02:26.648000 [ 616 ] DEBUG : Stderr: Network roottestbackuprestoreonclustercancelbackup-gw5_default Removing (cluster.py:147, run_and_check) 2025-07-28 10:02:26.648000 [ 616 ] DEBUG : Stderr: Network 
roottestbackuprestoreonclustercancelbackup-gw5_default Removed (cluster.py:147, run_and_check) 2025-07-28 10:02:26.649000 [ 616 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-07-28 10:02:26.675000 [ 616 ] DEBUG : Docker networks for project roottestbackuprestoreonclustercancelbackup-gw5 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-07-28 10:02:26.706000 [ 616 ] DEBUG : Docker containers for project roottestbackuprestoreonclustercancelbackup-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-07-28 10:02:26.734000 [ 616 ] DEBUG : Docker volumes for project roottestbackuprestoreonclustercancelbackup-gw5 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-07-28 10:02:26.735000 [ 616 ] DEBUG : Command:[docker container list --all --filter name='^/roottestbackuprestoreonclustercancelbackup-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.765000 [ 616 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-07-28 10:02:26.765000 [ 616 ] DEBUG : No running containers for project: roottestbackuprestoreonclustercancelbackup-gw5 (cluster.py:879, cleanup) 2025-07-28 10:02:26.765000 [ 616 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-07-28 10:02:26.797000 [ 616 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-07-28 10:02:26.798000 [ 616 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.843000 [ 616 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-07-28 10:02:26.844000 [ 616 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-07-28 10:02:26.844000 [ 616 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:910, cleanup) 2025-07-28 10:02:26.844000 [ 616 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-07-28 10:02:26.876000 [ 616 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-07-28 10:02:26.876000 [ 616 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup) ____________________ test_cow_policy[cow_policy_multi_disk] ____________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = storage_policy = 'cow_policy_multi_disk' @pytest.mark.parametrize("storage_policy", ["cow_policy_multi_disk", "cow_policy_multi_volume"]) def test_cow_policy(start_cluster, storage_policy): try: > node.query_with_retry( f""" ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = '{storage_policy}' """, timeout=60, retry_count=3, ) test_cow_policy/test.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n ...R BY (postcode1, postcode2, addr1, addr2)\n SETTINGS storage_policy = 'cow_policy_multi_disk'\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f1fbcb08160> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. 
if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_disk' E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.6): E Code: 198. DB::Exception: Received from 172.16.1.2:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x00000000382e5051 E 1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bd54ed1 E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000001bcdf4af E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x000000001bcd865e E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000001bcd55e8 E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000001bcd6020 E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x000000001bcd6d67 E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000001c4f0840 E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c4ecfc1 E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x000000001c4eca8d E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000001c4ec454 E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000001c4f5cb0 E 12. 
./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000001c4f582d E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x000000001c4f062b E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c4d7d88 E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c4d6110 E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000001c5005d4 E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000021207352 E 18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x00000000212079dc E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x0000000021208a5b E 20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000002120e378 E 21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x00000000212033d1 E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000002120b083 E 23. DB::ReadBuffer::next() @ 0x000000000c5cc20b E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x000000002864bb82 E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x00000000286504df E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002864fe10 E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000028647f46 E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000028553d3e E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:380: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000002eee6b10 E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000002f698f96 E 31. 
./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000002f6985f6 E . (DNS_ERROR) E (query: ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_disk' E ) helpers/cluster.py:3712: Exception ---------------------------- Captured stdout setup ----------------------------- Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml ------------------------------ Captured log setup ------------------------------ 2025-07-28 10:01:39.204000 [ 604 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.228000 [ 604 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-07-28 10:01:39.228000 [ 604 ] DEBUG : No running containers (conftest.py:95, cleanup_environment) 2025-07-28 10:01:39.228000 [ 604 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment) 2025-07-28 10:01:39.228000 [ 604 ] DEBUG : Command:[docker network prune --force] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.259000 [ 604 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.264000 [ 604 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:145, run_and_check) 2025-07-28 10:01:39.265000 [ 604 ] INFO : Running tests in /ClickHouse/tests/integration/test_cow_policy/test.py (cluster.py:2738, start) 2025-07-28 10:01:39.265000 [ 604 ] DEBUG : Cluster start called. 
is_up=False (cluster.py:2745, start) 2025-07-28 10:01:39.297000 [ 604 ] DEBUG : Docker networks for project roottestcowpolicy-gw1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-07-28 10:01:39.328000 [ 604 ] DEBUG : Docker containers for project roottestcowpolicy-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-07-28 10:01:39.358000 [ 604 ] DEBUG : Docker volumes for project roottestcowpolicy-gw1 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-07-28 10:01:39.358000 [ 604 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-07-28 10:01:39.388000 [ 604 ] DEBUG : Docker networks for project roottestcowpolicy-gw1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-07-28 10:01:39.418000 [ 604 ] DEBUG : Docker containers for project roottestcowpolicy-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-07-28 10:01:39.447000 [ 604 ] DEBUG : Docker volumes for project roottestcowpolicy-gw1 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-07-28 10:01:39.447000 [ 604 ] DEBUG : Command:[docker container list --all --filter name='^/roottestcowpolicy-gw1-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.478000 [ 604 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-07-28 10:01:39.479000 [ 604 ] DEBUG : No running containers for project: roottestcowpolicy-gw1 (cluster.py:879, cleanup) 2025-07-28 10:01:39.479000 [ 604 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-07-28 10:01:39.513000 [ 604 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-07-28 10:01:39.513000 [ 604 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.544000 [ 604 ] DEBUG : Stderr:Error response from daemon: a prune operation is already running (cluster.py:147, run_and_check) 2025-07-28 10:01:39.544000 [ 604 ] DEBUG : Exitcode:1 (cluster.py:149, run_and_check) 2025-07-28 10:01:39.544000 [ 604 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:910, cleanup) 2025-07-28 10:01:39.545000 [ 604 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-07-28 10:01:39.575000 [ 604 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-07-28 10:01:39.575000 [ 604 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup) 2025-07-28 10:01:39.575000 [ 604 ] DEBUG : Setup directory for instance: node (cluster.py:2758, start) 2025-07-28 10:01:39.576000 [ 604 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir) 2025-07-28 10:01:39.577000 [ 604 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir) 2025-07-28 10:01:39.577000 [ 604 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir) 2025-07-28 10:01:39.578000 [ 604 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir) 2025-07-28 10:01:39.578000 [ 604 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_cow_policy/configs/overrides.yaml'] to /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/configs/config.d (cluster.py:4741, create_dir) 2025-07-28 10:01:39.579000 [ 604 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/database (cluster.py:4758, create_dir) 2025-07-28 10:01:39.580000 [ 604 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/logs (cluster.py:4769, create_dir) 2025-07-28 10:01:39.580000 [ 604 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir) 2025-07-28 10:01:39.580000 [ 604 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/.env (cluster.py:96, _create_env_file) 2025-07-28 10:01:39.581000 [ 604 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-07-28 10:01:39.581000 [ 604 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-07-28 10:01:39.581000 [ 604 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-07-28 10:01:39.582000 [ 604 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-07-28 10:01:39.596000 [ 604 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-07-28 10:01:39.597000 [ 604 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/.env --project-name roottestcowpolicy-gw1 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/docker-compose.yml pull] (cluster.py:121, run_and_check) 2025-07-28 10:01:50.094000 [ 604 ] DEBUG : Stderr: node Pulling (cluster.py:147, run_and_check) 2025-07-28 10:01:50.094000 [ 604 ] DEBUG : Stderr: node Pulled (cluster.py:147, run_and_check) 2025-07-28 10:01:50.094000 [ 604 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/.env --project-name roottestcowpolicy-gw1 --file 
/ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/docker-compose.yml up -d --no-recreate') (cluster.py:3139, start) 2025-07-28 10:01:50.094000 [ 604 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/.env --project-name roottestcowpolicy-gw1 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check) 2025-07-28 10:01:50.801000 [ 604 ] DEBUG : Stderr: Network roottestcowpolicy-gw1_default Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:50.801000 [ 604 ] DEBUG : Stderr: Network roottestcowpolicy-gw1_default Created (cluster.py:147, run_and_check) 2025-07-28 10:01:50.802000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Creating (cluster.py:147, run_and_check) 2025-07-28 10:01:50.802000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Created (cluster.py:147, run_and_check) 2025-07-28 10:01:50.802000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Starting (cluster.py:147, run_and_check) 2025-07-28 10:01:50.802000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Started (cluster.py:147, run_and_check) 2025-07-28 10:01:50.802000 [ 604 ] DEBUG : ClickHouse instance created (cluster.py:3147, start) 2025-07-28 10:01:50.802000 [ 604 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2005, get_instance_ip) 2025-07-28 10:01:50.806000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw1-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:50.807000 [ 604 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2015, get_instance_global_ipv6) 2025-07-28 10:01:50.810000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw1-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:50.811000 [ 604 ] DEBUG : Waiting for ClickHouse start in node, ip: 172.16.1.2... 
(cluster.py:3155, start) 2025-07-28 10:01:50.814000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw1-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:50.818000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:50.923000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.027000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.131000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.235000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.340000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.444000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.548000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.652000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.757000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.862000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:51.966000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.070000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.175000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.279000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.384000 [ 604 ] DEBUG : http://localhost:None "GET 
/v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.488000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.592000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.696000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.800000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:52.904000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:53.008000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:53.112000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:53.217000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:53.321000 [ 604 ] DEBUG : http://localhost:None "GET /v1.46/containers/9a9433ce1137d15386f20ebf3a890358ef1ac564854910cbc493f5ccb6c91b04/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-07-28 10:01:53.322000 [ 604 ] DEBUG : ClickHouse node started (cluster.py:3159, start) ------------------------------ Captured log call ------------------------------- 2025-07-28 10:01:53.324000 [ 604 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query) 2025-07-28 10:02:47.411000 [ 604 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county 
LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query) 2025-07-28 10:03:42.014000 [ 604 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query) 2025-07-28 10:04:37.223000 [ 604 ] DEBUG : Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node (cluster.py:3648, query) ___________________ test_cow_policy[cow_policy_multi_volume] ___________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = storage_policy = 'cow_policy_multi_volume' @pytest.mark.parametrize("storage_policy", ["cow_policy_multi_disk", "cow_policy_multi_volume"]) def test_cow_policy(start_cluster, storage_policy): try: > node.query_with_retry( f""" ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = '{storage_policy}' """, timeout=60, retry_count=3, ) test_cow_policy/test.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n ...BY (postcode1, postcode2, addr1, addr2)\n SETTINGS storage_policy = 'cow_policy_multi_volume'\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f1fbcb08160> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. 
if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_volume' E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.6): E Code: 198. DB::Exception: Received from 172.16.1.2:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x00000000382e5051 E 1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bd54ed1 E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000001bcdf4af E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x000000001bcd865e E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000001bcd55e8 E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000001bcd6020 E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x000000001bcd6d67 E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000001c4f0840 E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c4ecfc1 E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x000000001c4eca8d E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000001c4ec454 E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000001c4f5cb0 E 12. 
./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000001c4f582d E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x000000001c4f062b E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c4d7d88 E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c4d6110 E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000001c5005d4 E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000021207352 E 18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x00000000212079dc E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x0000000021208a5b E 20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000002120e378 E 21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x00000000212033d1 E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000002120b083 E 23. DB::ReadBuffer::next() @ 0x000000000c5cc20b E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x000000002864bb82 E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x00000000286504df E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002864fe10 E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000028647f46 E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000028553d3e E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:380: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000002eee6b10 E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000002f698f96 E 31. 
./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000002f6985f6 E . (DNS_ERROR) E (query: ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_volume' E ) helpers/cluster.py:3712: Exception ------------------------------ Captured log call ------------------------------- 2025-07-28 10:04:37.924000 [ 604 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-07-28 10:05:34.029000 [ 604 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-07-28 10:06:31.946000 [ 604 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-07-28 
10:07:26.556000 [ 604 ] DEBUG : Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node (cluster.py:3648, query) ---------------------------- Captured log teardown ----------------------------- 2025-07-28 10:07:27.092000 [ 604 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/.env --project-name roottestcowpolicy-gw1 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check) 2025-07-28 10:07:33.106000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:07:33.106000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:07:33.106000 [ 604 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-07-28 10:07:33.124000 [ 604 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/.env --project-name roottestcowpolicy-gw1 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw1/node/docker-compose.yml down --volumes] (cluster.py:121, run_and_check) 2025-07-28 10:07:33.651000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Stopping (cluster.py:147, run_and_check) 2025-07-28 10:07:33.651000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Stopped (cluster.py:147, run_and_check) 2025-07-28 10:07:33.651000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Removing (cluster.py:147, run_and_check) 2025-07-28 10:07:33.651000 [ 604 ] DEBUG : Stderr: Container roottestcowpolicy-gw1-node-1 Removed (cluster.py:147, run_and_check) 2025-07-28 10:07:33.651000 [ 604 ] DEBUG : Stderr: Network roottestcowpolicy-gw1_default Removing (cluster.py:147, run_and_check) 2025-07-28 10:07:33.651000 [ 604 ] DEBUG : Stderr: Network roottestcowpolicy-gw1_default Removed (cluster.py:147, run_and_check) 2025-07-28 10:07:33.652000 [ 604 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-07-28 10:07:33.683000 [ 604 ] DEBUG : Docker networks for project roottestcowpolicy-gw1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-07-28 10:07:33.718000 [ 604 ] DEBUG : Docker containers for project roottestcowpolicy-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-07-28 10:07:33.751000 [ 604 ] DEBUG : Docker volumes for project roottestcowpolicy-gw1 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-07-28 10:07:33.752000 [ 604 ] DEBUG : Command:[docker container list --all --filter name='^/roottestcowpolicy-gw1-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-07-28 10:07:33.783000 [ 604 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-07-28 10:07:33.783000 [ 604 ] DEBUG : No running containers for project: roottestcowpolicy-gw1 (cluster.py:879, cleanup) 2025-07-28 10:07:33.783000 [ 604 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-07-28 10:07:33.817000 [ 604 ] DEBUG : Trying to prune unused images... 
----------------- generated report log file: parallel0_1.jsonl -----------------
============================== slowest durations ===============================
169.05s call test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
164.27s call test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
23.38s call test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
21.24s setup test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
14.12s setup test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
6.80s teardown test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
2.94s teardown test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
0.00s setup test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
0.00s teardown test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
=========================== short test summary info ============================
FAILED test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup
FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] - Exce...
FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] - Ex...
======================== 3 failed in 357.74s (0:05:57) =========================
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 492, in <module>
    subprocess.check_call(cmd, shell=True, bufsize=0)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_gezf6p --privileged --dns-search='.'
--memory=30709022720 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=1 --color=no --durations=0 --report-log=parallel0_1.jsonl --report-log-exclude-logs-on-passed-tests test_backup_restore_on_cluster/test_cancel_backup.py::test_shutdown_cancels_backup 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]' 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]' -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 ' returned non-zero exit status 1.
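Note: the traceback above comes from the outer runner script, which starts the whole pytest session inside the integration-tests-runner image and lets subprocess.check_call raise when the docker run command exits non-zero. A condensed sketch of that control flow is shown below; the command string is abbreviated and only check_call's behaviour is the point, so do not read it as the runner's full implementation.

    import subprocess

    # pytest failures inside the container make `docker run` exit non-zero,
    # and check_call turns that into the CalledProcessError printed above.
    cmd = "docker run --rm altinityinfra/integration-tests-runner:226bfaf75ac1"  # options omitted
    try:
        subprocess.check_call(cmd, shell=True, bufsize=0)
    except subprocess.CalledProcessError as err:
        print(f"integration tests failed with exit status {err.returncode}")
        raise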